They’re pretty big! This image, from The Geography of Transport Systems, shows just how large container ships have become.

The latest generation of Ultra Large Container Ships (ULCS) can carry more than 15,000 TEU, or twenty-foot equivalent units, the standard measure shipping companies use to compare capacity. The Ever Given has a capacity of 20,124 TEU. That’s enough space for roughly 745 million bananas!
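As a rough sanity check on the banana figure, assuming one TEU holds around 37,000 bananas (a commonly cited rule of thumb, not a number from this post):

```r
# Assumption: ~37,000 bananas fit in one TEU (rule of thumb, not from the post)
bananas_per_teu <- 37000
ever_given_teu  <- 20124

total_bananas <- ever_given_teu * bananas_per_teu
round(total_bananas / 1e6)  # ~745 million bananas
```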

Source: Wikipedia

The median cargo ship has gotten longer and wider over time. This plot shows the size of the median cargo ship by decade.

The Ever Given is 400m long and 58m wide at her widest point, an area equivalent to roughly four football fields! And to think, she carries a crew of only 25.

The increase in size has been in length, beam (width at widest point), and deadweight tonnage. Have a look at how the rate of increase in deadweight tonnage has picked up since 2000.

Wow! That’s a steep increase in the amount of stuff these cargo ships can carry! The Ever Given, highlighted here with a midnight blue diamond, is among the largest cargo ships today. Launched in 2018, she has a deadweight tonnage capacity of nearly 220,000 tonnes.

There appears to be a bifurcation in cargo ship size – a trend towards many small cargo ships and relatively fewer very, very big ones.

Interestingly, though the maximum size keeps increasing, there is a widening gap between the many smaller ships at the bottom of the distribution and the few very large ones at the top. Look at those economies of scale kicking in!

How does the Ever Given compare to other large vessels?

We have seen that the Ever Given is a big container ship, but how does she compare to other types of ships? Cruise liners, despite not being as long, have more space enclosed within the hull, known as gross tonnage. Gross tonnage, as opposed to deadweight tonnage, measures volume rather than mass.
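Under the IMO 1969 Tonnage Convention, gross tonnage is a dimensionless function of a ship's total enclosed volume V in cubic metres: GT = (0.2 + 0.02 log10 V) × V. A minimal sketch; the 500,000 m³ example volume is purely illustrative:

```r
# Gross tonnage per the IMO 1969 Tonnage Convention:
#   GT = (0.2 + 0.02 * log10(V)) * V, with V in cubic metres
gross_tonnage <- function(volume_m3) {
  (0.2 + 0.02 * log10(volume_m3)) * volume_m3
}

# Illustrative only: a hypothetical 500,000 m^3 of enclosed volume
gross_tonnage(5e5)  # ~157,000 GT
```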

It is interesting to see that the container ships all have relatively low drafts compared to the supertankers. Cruise ships have the smallest drafts, given that they must navigate into many different harbours around the world, some of which cannot accommodate larger vessels such as supertankers.

Data

The data for this project is sourced from a number of places. The scraping code is up on my GitHub repository, accessible here.

Below I show the code for scraping one website, Vessel Finder.

We create a list of pages to scrape and a function that grabs the data, then iterate through each page to fetch and store the results.

Click here to see how we make the list of pages

The format of the URL is a stub, a page number, and a type specification for container ships (“type=401”).

# packages used throughout: tidyverse (stringr, dplyr, tidyr, purrr), rvest, glue
library(tidyverse)
library(rvest)
library(glue)

# list of urls to scrape
list_of_pages <- str_c("https://www.vesselfinder.com/vessels?page=", 
                       1:200, 
                       "&type=401") %>% 
  as_tibble() %>% 
  rename(url = value)

Now we need a function to grab all of the data from each page

Click here to see how we collect the data from the page

get_data_vessel_finder <- function(url){
  
  message(glue("Getting data from {url}"))
  
  # store the html from the page
  html <- read_html(url)
  
  # pull out the results table
  table <- html %>% 
    html_nodes(".results") %>% 
    html_table()
  
  table <- table[[1]]
  
  # tidy the table: drop the flag column, clean up the vessel name
  page <- table %>% 
    as_tibble(.name_repair = "unique") %>% 
    select(-Vessel...1) %>% 
    mutate(Vessel...2 = str_squish(Vessel...2)) %>% 
    rename(name = Vessel...2) %>% 
    mutate(name = str_remove_all(name, " Container Ship"),
           name = str_to_title(name))
  
  page
  
}

Now we put them together and iterate through each URL.

Click here to see how we iterate

df <- list_of_pages %>%
    # possibly() returns "failed" instead of erroring when a page can't be scraped
    mutate(data = map(url, possibly(get_data_vessel_finder, "failed")))

df <- df %>% 
  # drop any pages that failed before unnesting
  filter(map_lgl(data, is.data.frame)) %>% 
  unnest(data) %>% 
  janitor::clean_names() %>% 
  extract(size_m, c("length", "width"), "^([0-9]+)(.*)") %>% 
  mutate(across(length:width, parse_number)) %>% 
  rename(launched = built)

write_rds(df, "data/vessel_finder_data.rds")
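The `extract()` and `parse_number()` steps above split the combined size column into numeric length and width. A toy example, assuming Vessel Finder lists size as “length / beam” (e.g. “400 / 59”; the exact format is an assumption here):

```r
library(tidyverse)

# Assumption: size_m comes through as "length / beam", e.g. "400 / 59"
sizes <- tibble(size_m = c("400 / 59", "366 / 48")) %>% 
  # first capture group takes the leading digits (length),
  # second takes the remainder of the string (beam)
  extract(size_m, c("length", "width"), "^([0-9]+)(.*)") %>% 
  # parse_number() ignores the " / " prefix when parsing the width
  mutate(across(length:width, parse_number))

sizes
```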

Here is what the scraped data looks like: